On Sparse Associative Networks: A Least Squares Formulation

Author

  • Björn Johansson
Abstract

This report is a complement to the working document [4], in which a sparse associative network is described. It shows that the network learning rule in [4] can be viewed as the solution to a weighted least squares problem. This means that we can apply the theoretical framework of least squares problems and compare the network rule with other iterative algorithms that solve the same problem. The learning rule is compared with the gradient search algorithm and the RPROP algorithm in a simple synthetic experiment. The gradient rule has the slowest convergence, while the associative and RPROP rules converge at similar rates; the associative learning rule does, however, start from a smaller initial error than the RPROP rule. The same experiment also shows that convergence is faster under a monopolar constraint on the solution, i.e. when the solution is constrained to be non-negative. The least squares error is then slightly higher, but the norm of the solution is smaller, which yields a smaller interpolation error. The report also discusses a generalization of the least squares model that includes other known function approximation models.
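The comparison can be made concrete with a small sketch. The snippet below sets up a synthetic weighted least squares problem and solves it with plain gradient descent, a simplified RPROP variant, and projected gradient descent for the monopolar (non-negative) case. The problem data, step-size rules, and iteration counts are illustrative assumptions, not the report's experiment, and the associative learning rule itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic weighted least squares problem: minimize 0.5 * sum(w * (A x - b)^2).
# A, b, and the weights are illustrative stand-ins, not the report's data.
m, n = 100, 20
A = rng.standard_normal((m, n))
x_true = np.abs(rng.standard_normal(n))          # non-negative ground truth
b = A @ x_true + 0.01 * rng.standard_normal(m)
w = rng.uniform(0.5, 1.5, size=m)                # per-sample weights

def wls_grad(x):
    """Gradient of the weighted least squares error."""
    return A.T @ (w * (A @ x - b))

# --- Plain gradient descent ---
x_gd = np.zeros(n)
step = 1.0 / np.linalg.norm(A.T @ (w[:, None] * A), 2)  # 1/L step size
for _ in range(500):
    x_gd -= step * wls_grad(x_gd)

# --- Simplified RPROP: per-component steps adapted from gradient sign changes ---
x_rp = np.zeros(n)
steps = np.full(n, 1e-2)
prev_g = np.zeros(n)
for _ in range(500):
    g = wls_grad(x_rp)
    same_sign = g * prev_g
    steps[same_sign > 0] *= 1.2      # sign kept: accelerate
    steps[same_sign < 0] *= 0.5      # sign flipped: overshoot, slow down
    x_rp -= np.sign(g) * steps
    prev_g = g

# --- Monopolar (non-negative) constraint via projected gradient descent ---
x_nn = np.zeros(n)
for _ in range(500):
    x_nn = np.maximum(0.0, x_nn - step * wls_grad(x_nn))

for name, x in [("gradient", x_gd), ("RPROP", x_rp), ("projected (x >= 0)", x_nn)]:
    err = 0.5 * np.sum(w * (A @ x - b) ** 2)
    print(f"{name:>20}: error = {err:.4e}, ||x|| = {np.linalg.norm(x):.3f}")
```

Consistent with the report's observation, the non-negative solution typically ends up with a slightly higher residual but a smaller solution norm than the unconstrained ones.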


Related articles

Sparse kernel learning with LASSO and Bayesian inference algorithm

Kernelized LASSO (Least Absolute Shrinkage and Selection Operator) has been investigated in two separate recent papers [Gao, J., Antolovich, M., & Kwan, P. H. (2008). L1 LASSO and its Bayesian inference. In W. Wobcke, & M. Zhang (Eds.), Lecture notes in computer science: Vol. 5360 (pp. 318-324); Wang, G., Yeung, D. Y., & Lochovsky, F. (2007). The kernel path in kernelized LASSO. In Internationa...
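For a flavor of the underlying idea (not the cited papers' algorithms), a common way to set up a kernelized LASSO is to use the kernel matrix as the design matrix and place an l1 penalty on the expansion coefficients. The sketch below does this with scikit-learn's Lasso on toy data; the kernel, bandwidth, and regularization strength are all chosen arbitrarily.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)

# Toy 1-D regression data.
X = np.sort(rng.uniform(-3, 3, size=(60, 1)), axis=0)
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(60)

# RBF kernel matrix: one "feature" per training point.
gamma = 0.5
K = np.exp(-gamma * (X - X.T) ** 2)  # shape (60, 60)

# The l1 penalty on the kernel expansion coefficients selects a sparse
# subset of training points, giving a sparse kernel machine.
model = Lasso(alpha=0.01, max_iter=10000).fit(K, y)
print("non-zero coefficients:", np.count_nonzero(model.coef_), "of", K.shape[0])
```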


Least Squares Support Vector Machines and Primal Space Estimation

In this paper a methodology for estimation in kernel-induced feature spaces is presented, making a link between the primal-dual formulation of Least Squares Support Vector Machines (LS-SVM) and classical statistical inference techniques in order to perform linear regression in primal space. This is done by computing a finite dimensional approximation of the kernel-induced feature space mapping ...
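A minimal sketch of the general approach, assuming a Nyström-style finite-dimensional approximation of the kernel feature map followed by ridge regression in the resulting primal space (the data, kernel, number of landmarks, and regularization below are illustrative, not the paper's exact estimator):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

def rbf(A, B, gamma=0.5):
    """RBF kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Nystrom-style finite-dimensional feature map from a subsample of the data.
m = 20
landmarks = X[rng.choice(len(X), size=m, replace=False)]
evals, evecs = np.linalg.eigh(rbf(landmarks, landmarks))
evals = np.clip(evals, 1e-10, None)
T = evecs / np.sqrt(evals)            # maps k(x, landmarks) into feature space
Phi = rbf(X, landmarks) @ T           # (200, m) explicit primal features

# Ridge regression on the explicit features, i.e. estimation in primal space.
lam = 1e-3
wts = np.linalg.solve(Phi.T @ Phi + lam * np.eye(m), Phi.T @ y)
print("training RMSE:", np.sqrt(np.mean((Phi @ wts - y) ** 2)))
```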


Synthesis of a nonrecurrent associative memory model based on a nonlinear transformation in the spectral domain

A new nonrecurrent associative memory model is proposed. This model is composed of a nonlinear transformation in the spectral domain followed by the association. The Moore-Penrose pseudoinverse is employed to obtain the least squares optimal solution. Computer simulations are done to evaluate the performance of the model. The simulations use one-dimensional speech signals and two-dimensional he...
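The association step can be sketched as follows; the paper's nonlinear spectral-domain transformation is omitted, and the pattern pairs below are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stored pattern pairs: columns of X are inputs, columns of Y the associations.
X = rng.standard_normal((16, 5))   # 5 input patterns of dimension 16
Y = rng.standard_normal((8, 5))    # 5 associated output patterns

# Least squares optimal linear association via the Moore-Penrose pseudoinverse:
# W minimizes ||W X - Y||_F, and recall is exact here because the stored
# inputs are linearly independent.
W = Y @ np.linalg.pinv(X)

# Recall: each stored input should map (approximately) to its association.
print("max recall error:", np.abs(W @ X - Y).max())
```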


Sparse non-negative matrix factorizations via alternating non-negativity-constrained least squares for microarray data analysis

MOTIVATION: Many practical pattern recognition problems require non-negativity constraints. For example, pixels in digital images and chemical concentrations in bioinformatics are non-negative. Sparse non-negative matrix factorizations (NMFs) are useful when the degree of sparseness in the non-negative basis matrix or the non-negative coefficient matrix in an NMF needs to be controlled in approx...
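A bare-bones sketch of alternating non-negativity-constrained least squares for NMF is shown below, using scipy's nnls for each subproblem; the paper's sparsity penalties are omitted, and the data and rank are arbitrary:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(4)

# Non-negative data matrix (e.g., expression levels), factorized as A ~ W H.
A = np.abs(rng.standard_normal((30, 20)))
k = 4
W = np.abs(rng.standard_normal((30, k)))
H = np.abs(rng.standard_normal((k, 20)))

# Alternating non-negativity-constrained least squares: fix one factor and
# solve an NNLS problem per column of A (for H) or per row of A (for W).
for _ in range(20):
    H = np.column_stack([nnls(W, A[:, j])[0] for j in range(A.shape[1])])
    W = np.column_stack([nnls(H.T, A[i, :])[0] for i in range(A.shape[0])]).T

print("relative error:", np.linalg.norm(A - W @ H) / np.linalg.norm(A))
```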


Adaptive and Weighted Collaborative Representations for image classification

Recently, (Zhang et al., 2011) proposed a classifier based on collaborative representations (CR) with regularized least squares (CRC-RLS) for face recognition. CRC-RLS can replace sparse representation (SR) based classification (SRC) as a simple and fast alternative. With SR resulting from an l1-regularized least squares decomposition, CR starts from an l2-regularized least squares formul...
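A compact sketch of a CRC-RLS-style classifier, assuming the commonly described form: code the query over the whole training dictionary with an l2-regularized least squares fit, then assign the class with the smallest regularized class-wise residual (dictionary, dimensions, and the regularization weight below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy dictionary: 10 training samples per class, 3 classes, 30-D features.
n_classes, per_class, dim = 3, 10, 30
X = rng.standard_normal((dim, n_classes * per_class))
X /= np.linalg.norm(X, axis=0)                  # unit-norm atoms
labels = np.repeat(np.arange(n_classes), per_class)

def crc_rls(y, lam=0.01):
    """Collaborative representation with regularized least squares:
    code y over the whole dictionary, classify by class-wise residual."""
    P = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T)
    alpha = P @ y
    residuals = [
        np.linalg.norm(y - X[:, labels == c] @ alpha[labels == c])
        / np.linalg.norm(alpha[labels == c])
        for c in range(n_classes)
    ]
    return int(np.argmin(residuals))

# A query near class 1: a noisy combination of class-1 training samples.
query = X[:, labels == 1] @ rng.random(per_class) + 0.05 * rng.standard_normal(dim)
print("predicted class:", crc_rls(query / np.linalg.norm(query)))
```

Note that the projection matrix P depends only on the dictionary, so it can be precomputed once; this is what makes CRC-RLS fast at query time.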




Publication date: 2001